Welcome to the PCI Security Standards Council’s blog series, The AI Exchange: Innovators in Payment Security. This special, ongoing feature of our PCI Perspectives blog offers a resource for payment security industry stakeholders to exchange information about how they are adopting and implementing artificial intelligence (AI) in their organizations.
In this edition of The AI Exchange, Block’s Security Governance Partner, Jacob Ansari, offers insight into how his company is using AI, and how this rapidly growing technology is shaping the future of payment security.
How have you most recently incorporated artificial intelligence within your organization?
Our organization is incorporating AI into nearly all aspects of our work: software engineering, machine learning, security operations, and marketing. Personally, I’ve been using AI for a number of third-party security efforts: AI agents are very fast at consuming and analyzing the documents our third parties provide. I’m also using AI to help guide some on-site assessments I’m doing for third parties. For example, I’m not just assessing physical security controls; I’m using AI to evaluate their security policies and audit reports to tell me what I should expect to find. Then I can assess what I actually observe against what they claim they do.
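As an illustration of that document-analysis step, here is a minimal sketch that asks a model to turn a third party’s security policy into a checklist of controls to verify on site. It assumes the OpenAI Python client; the model, prompt, and file name are illustrative, not a description of Block’s actual tooling.

# Hypothetical sketch: distill a policy document into verifiable claims
# an assessor can check on site. Model and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def expected_controls(policy_text: str) -> str:
    """Ask the model to list the concrete controls the document claims."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are assisting a security assessor. List each "
                        "concrete, verifiable control this document claims "
                        "to have in place, one per line."},
            {"role": "user", "content": policy_text},
        ],
        temperature=0,  # favor consistent, conservative output
    )
    return response.choices[0].message.content

with open("vendor_security_policy.txt") as f:  # hypothetical input file
    print(expected_controls(f.read()))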
What is the most significant change you’ve seen in your organization since AI use has become so much more prevalent?
We’ve been using machine learning and other AI-adjacent technology for a long time, so for us the adoption of AI was a natural progression rather than a sharp pivot. That said, it has materially accelerated important priorities like developer velocity, and it has created an interesting array of both risks (e.g., AI access to certain data sets or functions, malicious insertion of commands into AI agents) and opportunities for security functions (e.g., using AI to parse large data sets quickly).
How do you see AI evolving or impacting payment security in the future?
I think AI could be put to work, relatively quickly, on better and more localized fraud detection. Payment processors could use AI models to identify fraudulent transactions and potentially even prevent them from completing. Merchants could scan their ecommerce sites for potentially malicious code that calls out to a hostile site.
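To make the merchant example concrete, here is a minimal sketch that flags script tags loading code from domains outside an allowlist, a common indicator of Magecart-style skimmers. The allowlist and page source are illustrative assumptions; a real scanner would also need to handle inline scripts, dynamic injection, and obfuscation.

# Hypothetical sketch: flag <script> tags that load from unexpected domains.
from html.parser import HTMLParser
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"shop.example.com", "js.stripe.com"}  # assumed allowlist

class ScriptSrcCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def suspicious_scripts(html: str) -> list[str]:
    parser = ScriptSrcCollector()
    parser.feed(html)
    # Relative paths (no netloc) are same-origin and skipped here.
    return [s for s in parser.sources
            if urlparse(s).netloc and urlparse(s).netloc not in ALLOWED_DOMAINS]

page = ('<script src="https://js.stripe.com/v3/"></script>'
        '<script src="https://evil.example.net/skim.js"></script>')
print(suspicious_scripts(page))  # ['https://evil.example.net/skim.js']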
What potential risks should organizations consider as AI becomes more integrated into payment security?
AI is imperfect and not entirely predictable. Asking an LLM the same question five times can give you five different answers, so expecting consistent results from a single query is likely to cause problems. Also, in the end, AI agents run on platforms and communicate with other systems, and all of that is attack surface facing the categories of threat we already know well: authentication problems, transport security, supply chain attacks, and the like.
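One simple way to see, and partially mitigate, that variability is to ask the same question several times and compare the answers rather than trusting a single run. The sketch below assumes the OpenAI Python client; its exact-string vote is deliberately crude (free-form answers rarely match verbatim) and is shown only to illustrate the variance.

# Hypothetical sketch: majority vote over repeated queries to the same model.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def majority_answer(question: str, trials: int = 5) -> tuple[str, int]:
    answers = []
    for _ in range(trials):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": question}],
        )
        answers.append(response.choices[0].message.content.strip())
    best, count = Counter(answers).most_common(1)[0]
    return best, count  # the most common answer and how many runs agreed

answer, votes = majority_answer("Answer yes or no: is TLS required "
                                "for transmitting cardholder data?")
print(f"{votes}/5 runs agreed: {answer}")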
What advice would you provide for an organization just starting their journey into using AI?
Start small: consider which tasks could be easy wins, and measure the usefulness of your efforts with clear, concrete metrics. Build on successes incrementally. Some organizations can make a big cultural shift to embracing AI rapidly, but that’s probably not the default, and you want to be careful.
What AI trend (not limited to payments) are you most excited about?
Professionally, I’m enthused by the ability to find needles in haystacks: looking for relevant events in a sea of log data, automating security operations by parsing currently manual ticket queues and looking for common patterns, consuming lots of input documents and synthesizing a cogent analysis quickly. Those have so many useful applications.
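As one hedged illustration of that needle-in-a-haystack idea, the sketch below collapses log lines into templates (masking numbers and hex values) and flags the rarest templates for human review. The masking patterns, threshold, and sample logs are illustrative assumptions, not a production detection pipeline.

# Hypothetical sketch: surface rare log events by template frequency.
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse variable fields so similar events share one template."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line.strip()

def rare_events(lines: list[str], max_count: int = 2) -> list[str]:
    counts = Counter(template(l) for l in lines)
    return [l for l in lines if counts[template(l)] <= max_count]

logs = [
    "user 1042 logged in from 10.0.0.5",
    "user 2210 logged in from 10.0.0.9",
    "user 1042 logged in from 10.0.0.5",
    "privilege escalation attempt by user 7 denied",
]
print(rare_events(logs))  # ['privilege escalation attempt by user 7 denied']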
Personally, I’ve been using ChatGPT to optimize my workout routines, and I’m sore enough that it seems to be working.